STARS - 2019


Section: New Results

Introduction

This year, Stars has obtained new results related to its three main research axes: (i) perception for activity recognition, (ii) action recognition, and (iii) semantic activity recognition.

Perception for Activity Recognition

Participants : François Brémond, Juan Diego Gonzales Zuniga, Abhijit Das, Antitza Dantcheva, Ujjwal Ujjwal, Srijan Das, David Anghelone, Monique Thonnat.

The new results for perception for activity recognition are:

  • Handling the Speed-Accuracy Trade-off in Deep Learning based Pedestrian Detection (see 6.2)

  • Deep Learning applied on Embedded Systems for People Tracking (see 6.3)

  • Partition and Reunion: A Two-Branch Neural Network for Vehicle Re-identification (see 6.4)

  • Improving Face Sketch Recognition via Adversarial Sketch-Photo Transformation (see 6.5)

  • Impact and Detection of Facial Beautification in Face Recognition: An Overview (see 6.6)

  • Computer Vision and Deep Learning applied to Facial analysis in the invisible spectra (see 6.7)

Action Recognition

Participants : François Brémond, Juan Diego Gonzales Zuniga, Abhijit Das, Antitza Dantcheva, Ujjwal Ujjwal, Srijan Das, Monique Thonnat.

The new results for action recognition are:

  • ImaGINator: Conditional Spatio-Temporal GAN for Video Generation (see 6.8)

  • Characterizing the State of Apathy with Facial Expression and Motion Analysis (see 6.9)

  • Dual-threshold Based Local Patch Construction Method for Manifold Approximation and Its Application to Facial Expression Analysis (see 6.10)

  • A Weakly Supervised Learning Technique for Classifying Facial Expressions (see 6.11)

  • Robust Remote Heart Rate Estimation from Face Utilizing Spatial-temporal Attention (see 6.12)

  • Quantified Analysis for Epileptic Seizure Videos (see 6.13)

  • Toyota Smarthome: Real-World Activities of Daily Living (see 6.15)

  • Looking deeper into Time for Activities of Daily Living Recognition (see 6.15.1)

  • Self-Attention Temporal Convolutional Network for Long-Term Daily Living Activity Detection (see 6.16)

Semantic Activity Recognition

Participants : François Brémond, Elisabetta de Maria, Antitza Dantcheva, Srijan Das, Abhijit Das, Daniel Gaffé, Thibaud L'Yvonnet, Sabine Moisan, Jean-Paul Rigault, Annie Ressouche, Ines Sarray, Yaohui Wang, S L Happy, Alexandra König, Philippe Robert, Monique Thonnat.

For this research axis, the contributions are:

  • DeepSpa Project (see 6.17)

  • Store Connect and Solitaria (see 6.18)

  • Synchronous Approach to Activity Recognition (see 6.19)

  • Probabilistic Activity Modeling (see 6.20)